User cluster partitioning method based on weighted fuzzy clustering in ground-air collaboration scenarios
Tianyu HUANG, Yuanxing LI, Hao CHEN, Zijia GUO, Mingjun WEI
Journal of Computer Applications    2024, 44 (5): 1555-1561.   DOI: 10.11772/j.issn.1001-9081.2023050643

To address the user cluster partitioning issue in the deployment of Unmanned Aerial Vehicle (UAV) base stations for auxiliary communication in emergency scenarios, a feature-weighted fuzzy clustering algorithm named Improved FCM was proposed, considering both the performance of UAV base stations and user experience. Firstly, to tackle the high computational complexity and convergence difficulty of partitioning randomly distributed user clusters, a feature-weighted node data projection algorithm based on distance weighting was introduced according to the performance constraints of each UAV base station: its signal coverage range and maximum number of served users. Secondly, to partition users effectively when the same user falls within the effective ranges of multiple clusters, and to maximize UAV base station resource utilization, a value-weighted algorithm based on user location and UAV base station load balancing was proposed. Experimental results demonstrate that the proposed methods meet the service performance constraints of UAV base stations. Additionally, the deployment scheme based on the proposed methods effectively improves the average load rate and coverage ratio of the system, reaching 0.774 and 0.026 3 respectively, higher than those of comparison algorithms such as GFA (Geometric Fractal Analysis) and Sp-C (Spectral Clustering).
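The feature-weighted clustering at the core of this method can be sketched as a fuzzy c-means update with per-feature distance weights (a minimal illustration only; the function name and parameters are hypothetical, and the paper's Improved FCM additionally enforces UAV coverage and load constraints):

```python
import numpy as np

def weighted_fcm(X, c, w, m=2.0, iters=50, seed=0):
    """Fuzzy c-means with per-feature weights w (illustrative sketch).

    X: (n, d) user positions, c: number of clusters (UAV base stations),
    w: (d,) feature weights, m: fuzzifier (> 1).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # each user's memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # feature-weighted squared distance from every user to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)               # avoid division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
    return U, centers
```

Raising a feature's weight in `w` makes distances along that dimension dominate the partition, which is the sense in which distance weighting can steer users toward base stations within signal range.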

Fake review detection algorithm combining Gaussian mixture model and text graph convolutional network
Xing WANG, Guijuan LIU, Zhihao CHEN
Journal of Computer Applications    2024, 44 (2): 360-368.   DOI: 10.11772/j.issn.1001-9081.2023020219

To address the insufficient design of the edge weight window threshold in Text Graph Convolutional Network (Text GCN), and to mine word association structures more accurately and improve prediction accuracy, a fake review detection algorithm combining Gaussian Mixture Model (GMM) and Text GCN, named F-Text GCN, was proposed. The GMM was used to separate the noise edge weight distribution, thereby strengthening the edge signals of fake reviews, which are relatively weak compared to those of normal reviews in the training data. Additionally, considering the diversity of information sources, the adjacency matrix was constructed by combining documents, words, reviews and non-text features. Finally, the fake review association structure was extracted from the adjacency matrix through the spectral decomposition of Text GCN. Validation experiments were performed on 126 086 actual Chinese reviews collected from a large domestic e-commerce platform. Experimental results show that, for detecting fake reviews, the F1 value of F-Text GCN is 82.92%, outperforming BERT (Bidirectional Encoder Representation from Transformers) and Text CNN by 10.46% and 11.60% respectively, and 2.94% higher than that of Text GCN. For highly imitated fake reviews, which are challenging to detect, F-Text GCN achieves an overall prediction accuracy of 94.71% by secondary detection on the samples that are difficult for Support Vector Machine (SVM) to detect, which is 2.91% and 14.54% higher than those of Text GCN and SVM. Study findings indicate that lexical interference with consumer decision-making is evident in the second-order graph neighbor structure of fake reviews. This result indicates that the proposed algorithm is especially suitable for extracting long-range word collocation structures and global sentence feature patterns for fake review detection.
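The GMM step — fitting two Gaussians so a noise component can be separated from a signal component — can be illustrated with a tiny 1-D expectation-maximization loop (a self-contained sketch; the paper applies this to edge-weight distributions, and all names here are hypothetical):

```python
import math

def em_two_gaussians(xs, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch).

    Initialized from the data extremes so the components start apart.
    """
    mu1, mu2 = min(xs), max(xs)
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    pi = 0.5                                     # mixing weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in xs:
            p1 = pi * math.exp(-((x - mu1) ** 2) / (2 * s1 ** 2)) / s1
            p2 = (1 - pi) * math.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)) / s2
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate means, stddevs and the mixing weight
        n1 = sum(resp)
        n2 = len(xs) - n1
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / n2
        s1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, xs)) / n2) or 1e-6
        pi = n1 / len(xs)
    return (mu1, s1), (mu2, s2), pi
```

Once the two components are fitted, weights whose responsibility falls mostly under the low-mean component can be treated as noise edges and down-weighted.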

Lightweight image super-resolution reconstruction network based on Transformer-CNN
Hao CHEN, Zhenping XIA, Cheng CHENG, Xing LIN-LI, Bowen ZHANG
Journal of Computer Applications    2024, 44 (1): 292-299.   DOI: 10.11772/j.issn.1001-9081.2023010048

Aiming at the high computational complexity and large memory consumption of existing super-resolution reconstruction networks, a lightweight image super-resolution reconstruction network based on Transformer-CNN was proposed, making super-resolution reconstruction more suitable for embedded terminals such as mobile platforms. Firstly, a hybrid block based on Transformer-CNN was proposed, which enhanced the network's ability to capture local-global depth features. Then, a modified inverted residual block, with special attention to the characteristics of the high-frequency region, was designed to improve feature extraction ability and reduce inference time. Finally, after exploring the best options for the activation function, the GELU (Gaussian Error Linear Unit) activation function was adopted to further improve network performance. Experimental results show that the proposed network achieves a good balance between image super-resolution performance and network complexity, and reaches an inference speed of 91 frames/s on the benchmark dataset Urban100 with a scale factor of 4, about 11 times faster than SwinIR (Image Restoration using Swin Transformer), indicating that the proposed network can efficiently reconstruct the textures and details of an image while significantly reducing inference time.
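The GELU activation adopted here has a closed form via the standard normal CDF; a minimal reference implementation (the exact form, not the tanh approximation):

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Unlike ReLU, GELU weights inputs by how large they are relative to the normal distribution rather than hard-gating at zero, which is one reason it is a common choice in Transformer-style blocks.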

Text image editing method based on font and character attribute guidance
Jingchao CHEN, Shugong XU, Youdong DING
Journal of Computer Applications    2023, 43 (5): 1416-1421.   DOI: 10.11772/j.issn.1001-9081.2022040520

Aiming at the problems of inconsistent text style before and after editing and insufficient readability of the generated text in text image editing tasks, a text image editing method guided by font and character attributes was proposed. Firstly, the generation of text foreground style was guided by a font attribute classifier combined with font classification, perceptual and texture losses to improve the consistency of text style before and after editing. Secondly, the accurate generation of text glyphs was guided by a character attribute classifier combined with a character classification loss to reduce text artifacts and generation errors, and to improve the readability of the generated text. Finally, an end-to-end fine-tuning training strategy was used to refine the generated results of the entire staged editing model. In comparison experiments with SRNet (Style Retention Network) and SwapText, the proposed method achieves PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity) of 25.48 dB and 0.842, which are 2.57 dB and 0.055 higher than those of SRNet and 2.11 dB and 0.046 higher than those of SwapText, respectively; the Mean Square Error (MSE) is 0.004 3, which is 0.003 1 and 0.024 lower than those of SRNet and SwapText, respectively. Experimental results show that the proposed method can effectively improve the generation effect of text image editing.
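PSNR, the headline metric above, is a direct function of mean squared error; a minimal sketch for pixel values normalized to [0, 1]:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mean_sq_err, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB; peak is the maximum pixel value."""
    return 10.0 * math.log10(peak ** 2 / mean_sq_err)
```

Because of the logarithm, small absolute MSE differences between methods translate into dB-scale PSNR differences.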

No-reference image quality assessment algorithm based on saliency deep features
Jia LI, Yuanlin ZHENG, Kaiyang LIAO, Haojie LOU, Shiyu LI, Zehao CHEN
Journal of Computer Applications    2022, 42 (6): 1957-1964.   DOI: 10.11772/j.issn.1001-9081.2021040597

For universal No-Reference Image Quality Assessment (NR-IQA), a new algorithm based on the saliency deep features of a pseudo reference image was proposed. Firstly, the pseudo reference image corresponding to the distorted image, generated by the ConSinGAN model, was used as compensation information for the distorted image, making up for the main weakness of NR-IQA methods: the lack of real reference information. Secondly, the saliency information of the pseudo reference image was extracted, and the pseudo saliency map and the distorted image were input into a VGG16 network to extract deep features. Finally, the obtained deep features were merged and mapped into a regression network composed of fully connected layers to obtain a quality prediction consistent with human vision. Experiments were conducted on four large public image datasets, TID2013, TID2008, CSIQ and LIVE, to prove the effectiveness of the proposed algorithm. The results show that the Spearman Rank-Order Correlation Coefficient (SROCC) of the proposed algorithm on the TID2013 dataset is 5 percentage points higher than that of the H-IQA (Hallucinated-IQA) algorithm and 14 percentage points higher than that of the RankIQA (learning from Rankings for no-reference IQA) algorithm. The proposed algorithm also performs stably on single distortion types. Experimental results indicate that the proposed algorithm is superior to existing mainstream Full-Reference Image Quality Assessment (FR-IQA) and NR-IQA algorithms, and is consistent with human subjective perception.
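SROCC, the evaluation metric used above, compares only the rank orders of predicted and subjective scores; a minimal implementation (no tie correction, for illustration):

```python
def srocc(a, b):
    """Spearman rank-order correlation coefficient (no tie handling)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A value of 1 means the predicted quality ordering matches human judgments exactly, regardless of the absolute score scale.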

Cache cooperation strategy for maximizing revenue in mobile edge computing
Yali WANG, Jiachao CHEN, Junna ZHANG
Journal of Computer Applications    2022, 42 (11): 3479-3485.   DOI: 10.11772/j.issn.1001-9081.2022020194

Mobile Edge Computing (MEC) can reduce the energy consumption of mobile devices and the delay of users' access to services by deploying resources in users' neighborhood; however, most relevant caching studies ignore the regional differences of the services requested by users. A cache cooperation strategy for maximizing revenue was proposed by considering the features of requested content in different regions and the dynamic characteristics of content. Firstly, considering the regional features of user preferences, the base stations were partitioned into several collaborative domains, so that the base stations in each collaborative domain served users with the same preferences. Then, the content popularity in each region was predicted by the Auto-Regressive Integrated Moving Average (ARIMA) model and the similarity of the content. Finally, the cache cooperation problem was transformed into a revenue maximization problem, and a greedy algorithm was used to solve the content placement and replacement problems according to the revenue obtained by content storage. Simulation results show that, compared with the Grouping-based and Hierarchical Collaborative Caching (GHCC) algorithm based on MEC, the proposed algorithm improves the cache hit rate by 28% with lower average transmission delay. It can be seen that the proposed algorithm can effectively improve the cache hit rate and reduce the average transmission delay at the same time.
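The greedy placement step can be sketched as filling a base station's cache in descending revenue-per-size order (an illustrative knapsack-style heuristic; the tuple format and values are hypothetical, and the paper's revenue combines predicted popularity and cooperation gains):

```python
def greedy_cache(contents, capacity):
    """Fill a cache greedily in descending revenue-per-size order.

    contents: iterable of (name, size, revenue) tuples; names, sizes and
    revenues here are hypothetical placeholders.
    """
    chosen, used = [], 0
    ranked = sorted(contents, key=lambda t: t[2] / t[1], reverse=True)
    for name, size, revenue in ranked:
        if used + size <= capacity:        # place content only if it still fits
            chosen.append(name)
            used += size
    return chosen
```

Replacement works the same way: when new content's revenue density exceeds that of the cheapest cached item, the latter is evicted.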

Image super-resolution reconstruction based on attention mechanism
WANG Yongjin, ZUO Yu, WU Lian, CUI Zhongwei, ZHAO Chenjie
Journal of Computer Applications    2021, 41 (3): 845-850.   DOI: 10.11772/j.issn.1001-9081.2020060979
At present, super-resolution reconstruction of a single image achieves a good effect, but most models do so by increasing the number of network layers rather than exploring the correlation between channels. To solve this problem, an image super-resolution reconstruction method based on the Channel Attention mechanism (CA) and Depthwise Separable Convolution (DSC) was proposed. Multi-path global and local residual learning was adopted by the entire model. Firstly, a shallow feature extraction block was used to extract the features of the input image. Then, the channel attention mechanism was introduced in the deep feature extraction block, and the correlation between channels was increased by adjusting the weights of the feature maps of different channels to extract high-frequency feature information. Finally, a high-resolution image was reconstructed. To reduce the large number of parameters introduced by the attention mechanism, depthwise separable convolution was used in the local residual block to greatly reduce the training parameters. Meanwhile, the Adaptive moment estimation (Adam) optimizer was used to accelerate the convergence of the model, so as to improve the algorithm performance. Image reconstruction with the proposed method was carried out on the Set5 and Set14 datasets. Experimental results show that the images reconstructed by the proposed method have higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM), and the parameters of the proposed model are reduced to 1/26 of those of the deep Residual Channel Attention Network (RCAN) model.
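The channel-weight adjustment described above follows the squeeze-and-excitation pattern: pool each channel to one number, pass it through two small fully connected layers, and rescale the channels by the resulting sigmoid weights. A NumPy sketch with hypothetical, untrained weight matrices `w1` and `w2`:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    x: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r) are the
    weights of the two fully connected layers (hypothetical, untrained).
    """
    s = x.mean(axis=(1, 2))              # squeeze: global average pooling
    z = np.maximum(w1 @ s, 0.0)          # excitation: FC + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # FC + sigmoid -> per-channel weight
    return x * a[:, None, None]          # rescale each channel's feature map
```

The reduction ratio r keeps the two FC layers small, which matters in a lightweight model.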
Knowledge base question answering system based on multi-feature semantic matching
ZHAO Xiaohu, ZHAO Chenglong
Journal of Computer Applications    2020, 40 (7): 1873-1878.   DOI: 10.11772/j.issn.1001-9081.2019111895
The task of Question Answering over Knowledge Base (KBQA) mainly aims at accurately matching a natural language question with triples in the Knowledge Base (KB). However, traditional KBQA methods usually focus on entity recognition and predicate matching, and errors in entity recognition may propagate and thus prevent getting the right answer. To solve this problem, an end-to-end solution was proposed to directly match the question and triples. The system consists of two parts: candidate triple generation and candidate triple ranking. Firstly, candidate triples were generated by using the BM25 algorithm to calculate the correlation between the question and the triples in the knowledge base. Then, a Multi-Feature Semantic Matching Model (MFSMM) was used to rank the triples: semantic similarity and character similarity were calculated through a Bi-directional Long Short-Term Memory network (Bi-LSTM) and a Convolutional Neural Network (CNN) respectively, and the triples were ranked by fusing the two. With NLPCC-ICCPOL 2016 KBQA as the dataset, the average F1 of the proposed system is 80.35%, which is close to the existing best performance.
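The candidate-generation step relies on BM25; a minimal scoring function over tokenized documents (the standard formula with the usual k1 and b defaults; names are illustrative):

```python
import math

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    """BM25 relevance of one tokenized doc to a tokenized query (sketch)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    score = 0.0
    for term in query:
        n = sum(term in d for d in docs)                 # document frequency
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1.0)  # smoothed IDF
        tf = doc.count(term)                             # term frequency in doc
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Scoring every triple's surface form against the question this way yields a ranked shortlist that the MFSMM then re-ranks.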
Microscopic image identification for small-sample Chinese medicinal materials powder based on deep learning
WANG Yiding, HAO Chenyu, LI Yaoli, CAI Shaoqing, YUAN Yuan
Journal of Computer Applications    2020, 40 (5): 1301-1308.   DOI: 10.11772/j.issn.1001-9081.2019091646

Aiming at the problems that many varieties of Chinese medicinal materials have only small samples and their vessels are difficult to classify, an improved convolutional neural network method was proposed based on a multi-channel color space and an attention mechanism model. Firstly, the multi-channel color space was used to merge the RGB color space with other color spaces into 6 channels as the network input, so that the network could learn characteristic information such as brightness, hue and saturation to make up for the insufficient samples. Secondly, the attention mechanism model was added to the network, in which the two pooling layers were tightly connected by the channel attention model and multi-scale dilated convolutions were combined by the spatial attention model, so that the network focused on the key feature information in the small samples. On 8 774 vessel images of 34 Chinese medicinal material samples, experimental results show that, compared with the original ResNet network, the multi-channel color space and the attention mechanism model increase the accuracy by 1.8 and 3.1 percentage points respectively, and combining the two increases the accuracy by 4.1 percentage points. It can be seen that the proposed method greatly improves the accuracy of small-sample classification.

Integral attack on PICO algorithm based on division property
LIU Zongfu, YUAN Zheng, ZHAO Chenxi, ZHU Liang
Journal of Computer Applications    2020, 40 (10): 2967-2972.   DOI: 10.11772/j.issn.1001-9081.2019122228
PICO, proposed in recent years, is a bit-based ultra-lightweight block cipher algorithm. The security of this algorithm against integral cryptanalysis was evaluated. Firstly, by analyzing the structure of the PICO cipher, a Mixed-Integer Linear Programming (MILP) model of the algorithm was established based on division property. Then, according to the set constraints, linear inequalities were generated to describe the propagation rules of the division property, the MILP problem was solved with the help of mathematical software, and the success of constructing an integral distinguisher was judged from the objective function value. Finally, automatic search for integral distinguishers of the PICO algorithm was realized. Experimental results show that a 10-round integral distinguisher of the PICO algorithm was found, the longest so far. However, the small number of available plaintexts is not conducive to key recovery, so to obtain better attack performance, the searched 9-round distinguisher was used to perform an 11-round key recovery attack on the PICO algorithm. It is shown that the proposed attack can recover the 128-bit round key with a data complexity of 2^63.46, a time complexity of 2^76 11-round encryptions, and a storage complexity of 2^20.
3D-coverage algorithm based on adjustable radius in wireless sensor network
DANG Xiaochao, SHAO Chenguang, HAO Zhanjun
Journal of Computer Applications    2018, 38 (9): 2581-2586.   DOI: 10.11772/j.issn.1001-9081.2018020357
For the coverage problem in 3D Wireless Sensor Network (WSN), this paper introduced a Three-Dimensional Coverage Algorithm based on Adjustable Radius in wireless sensor networks (3D-CAAR). Virtual force was used to achieve uniform distribution of nodes in the WSN; at the same time, the distance between a sensor node and the target points in the covered area was determined by the radius-adjustable coverage mechanism of sensor nodes. An energy consumption threshold was introduced to enable nodes to adjust their radii according to their own situations, thus reducing the overall network energy consumption and improving node utilization. Finally, experiments comparing the algorithm with the traditional ECA3D (Exact Covering Algorithm in Three-Dimensional space) and APFA3D (Artificial Potential Field Algorithm in Three-Dimensional space) show that 3D-CAAR can effectively solve the problem of target node coverage in sensor networks.
Paging-measurement method for virtual machine process code based on hardware virtualization
CAI Mengjuan, CHEN Xingshu, JIN Xin, ZHAO Cheng, YIN Mingyong
Journal of Computer Applications    2018, 38 (2): 305-309.   DOI: 10.11772/j.issn.1001-9081.2017082167
In cloud environments, the code of pivotal business in a Virtual Machine (VM) can be modified by malicious software in many ways, which poses a threat to its stable operation. Traditional host-based measurement systems are liable to be bypassed or attacked. To solve the problem that it is difficult to obtain the complete running process code of a virtual machine and verify its integrity at the Virtual Machine Monitor (VMM) layer, a paging-measurement method based on hardware virtualization was proposed. The Kernel-based Virtual Machine (KVM) was used as the VMM to capture the system calls of virtual machine processes, which were regarded as the trigger points of the measurement process; the semantic differences between virtual machine versions were resolved by using relative address offsets, so that the paging-measurement method could transparently verify the code integrity of processes running in the virtual machine at the VMM layer. The implemented prototype system, VMPMS (Virtual Machine Paging-Measurement System), can effectively measure virtual machine process code with acceptable performance loss.
Improved K-means clustering algorithm based on multi-dimensional grid space
SHAO Lun, ZHOU Xinzhi, ZHAO Chengping, ZHANG Xu
Journal of Computer Applications    2018, 38 (10): 2850-2855.   DOI: 10.11772/j.issn.1001-9081.2018040830
K-means is a widely used clustering algorithm, but the selection of the initial clustering centers in the traditional K-means algorithm is random, which makes the algorithm easily fall into local optima and causes instability in the clustering result. To solve this problem, the idea of a multi-dimensional grid space was introduced into the selection of initial clustering centers. Firstly, the sample set was mapped to a virtual multi-dimensional grid space structure. Secondly, the sub-grids containing the largest numbers of samples and being far away from each other were searched as the initial cluster center grids in the space structure. Finally, the mean points of the samples in the initial cluster center grids were calculated as the initial clustering centers. The initial clustering centers chosen by this method are very close to the actual clustering centers, so that the final clustering result can be obtained stably and efficiently. Tests on a computer-simulated dataset and UCI machine learning datasets show that both the iteration number and the error rate of the improved algorithm are stable, and lower than the averages of the traditional K-means algorithm. The improved algorithm can effectively avoid falling into local optima and guarantees the stability of the clustering result.
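The grid-based center selection can be sketched in two dimensions: hash samples into grid cells, then take the densest, mutually distant cells and return their sample means as initial centers (an illustrative simplification of the multi-dimensional version; all names and the cell-distance rule are hypothetical):

```python
def grid_init_centers(points, k, cells=4):
    """Initial K-means centers from the densest, mutually distant grid cells
    (2-D sketch of the multi-dimensional grid idea)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    buckets = {}
    for p in points:  # map each sample to its grid cell
        i = min(int((p[0] - x0) / (x1 - x0 + 1e-9) * cells), cells - 1)
        j = min(int((p[1] - y0) / (y1 - y0 + 1e-9) * cells), cells - 1)
        buckets.setdefault((i, j), []).append(p)
    # densest cells first; greedily keep cells far from those already chosen
    ranked = sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
    chosen = []
    for cell, pts in ranked:
        if all(abs(cell[0] - c[0]) + abs(cell[1] - c[1]) > 1 for c, _ in chosen):
            chosen.append((cell, pts))
        if len(chosen) == k:
            break
    return [(sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
            for _, pts in chosen]
```

The returned means then seed the ordinary K-means iteration in place of random initialization.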
Virtual machine file integrity monitoring based on hardware virtualization
ZHAO Cheng, CHEN Xingshu, JIN Xin
Journal of Computer Applications    2017, 37 (2): 388-391.   DOI: 10.11772/j.issn.1001-9081.2017.02.0388
In order to protect the integrity of Virtual Machine (VM) sensitive files and make up for the shortcomings of instruction-based out-of-box monitoring methods, such as high performance overhead, low compatibility and poor flexibility, OFM (Out-of-box File Monitoring) based on hardware virtualization was proposed. In OFM, the Kernel-based Virtual Machine (KVM) was used as the virtual machine monitor to dynamically configure sensitive-file access control strategies in real time; in addition, OFM could modify the system call table entries related to file operations of the virtual machine to determine the legitimacy of VM processes operating on files, and deal with illegal processes. Unixbench was deployed in a virtual machine to test the performance of OFM. The experimental results demonstrate that OFM outperforms instruction-monitoring methods in file monitoring and has no effect on other types of system calls in virtual machines. Meanwhile, OFM can effectively monitor the integrity of virtual machine files and provides better compatibility, flexibility and lower performance loss.
Achievable information rate region of multi-source multi-sink multicast network coding
PU Baoxing, ZHU Hongpeng, ZHAO Chenglin
Journal of Computer Applications    2015, 35 (6): 1546-1551.   DOI: 10.11772/j.issn.1001-9081.2015.06.1546

In order to solve the multi-source multi-sink multicast network coding problem, an algorithm for computing the achievable information rate region and an approach for constructing a linear network coding scheme were proposed. Based on previous studies, the multi-source multi-sink multicast network coding problem was transformed into a specific single-source multicast network coding scenario with a constraint at the source node. By theoretical analysis and formula derivation, the constraint relationship among the multicast rates of the source nodes was found. Then a multi-objective optimization model was constructed to describe the boundary of the achievable information rate region. Two methods were presented for solving this model: an enumeration method and a multi-objective optimization method based on a genetic algorithm. The achievable information rate region can be derived from the Pareto boundary of the multi-objective optimization model. After assigning the multicast rates of the source nodes, the linear network coding scheme can be constructed by solving the constrained single-source multicast network coding scenario. The simulation results show that the proposed methods can find the boundary of the achievable information rate region, including integral points, and construct a linear network coding scheme.

Tradeoff between multicast rate and number of coding nodes based on network coding
PU Baoxing, ZHAO Chenglin
Journal of Computer Applications    2015, 35 (4): 929-933.   DOI: 10.11772/j.issn.1001-9081.2015.04.0929

Based on single-source multicast network coding, in order to explore the relationship between the multicast rate and the number of minimally needed coding nodes, theoretical analysis and formula derivation of the relationship were given by employing the generation and extension techniques of linear network coding. It is concluded that the number of minimally needed coding nodes increases monotonically with the multicast rate. A multi-objective optimization model was constructed to accurately describe the quantitative relationship between them. To solve this model, a search strategy was derived to enumerate all feasible coding schemes; by combining this search strategy with NSGA-II, an algorithm for solving the model was presented. When the tradeoff between the two quantities must be considered, the solution of the model is the basis for choosing a network coding scheme. The proposed algorithm can not only search the whole Pareto set, but can also search, at lower cost, the part of the Pareto set related to a feasible multicast rate region given by the user. The simulation results verify the conclusion of the theoretical analysis, and indicate that the proposed algorithm is feasible and efficient.

Micro-blog information diffusion effect based on behavior analysis
QI Chao, CHEN Hongchang, YU Yan
Journal of Computer Applications    2014, 34 (8): 2404-2408.   DOI: 10.11772/j.issn.1001-9081.2014.08.2404

Research into the dissemination effect of micro-blog messages plays an important role in improving marketing, strengthening public opinion monitoring and discovering hotspots accurately. Focusing on the differences between individuals, which were not considered previously, this paper proposed a method of predicting the scale and depth of retweeting based on behavior analysis. A predictive model of retweet behavior was built with the Logistic Regression (LR) algorithm using nine features extracted from users, relationships and content. Based on this model, the proposed method exploits the characteristic that information disseminates along users, applying iterative statistical analysis to adjacent users step by step. The experimental results on a Sina micro-blog dataset show that the accuracy of scale and depth prediction approximates 87.1% and 81.6% respectively, so the method predicts the dissemination effect well.

Digital watermarking scheme of vector animation based on least significant bit algorithm and changed elements
WANG Tao, LI Fudan, XU Chao, CHEN Yan
Journal of Computer Applications    2014, 34 (5): 1304-1308.   DOI: 10.11772/j.issn.1001-9081.2014.05.1304

To fill the gap in digital watermarking technology for 2D vector animation, this paper proposed a blind watermarking scheme that makes full use of vector and timing characteristics. The scheme adopts the color values of changed elements in adjacent frames of the vector animation as the embedding target, and uses the Least Significant Bit (LSB) algorithm for embedding and extraction, embedding multiple groups of watermarks into the vector animation. Finally, the accurate watermark can be obtained by verifying the extracted groups of watermarks. Theoretical analysis and experimental results show that this scheme is easy to implement and robust, and can also realize tamper-proofing. Moreover, the vector animation can be played in real time during watermark embedding and extraction.
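The embedding primitive is plain LSB substitution on color components; a minimal sketch (the paper embeds into the color values of changed elements across adjacent frames, which this fragment does not model):

```python
def lsb_embed(color, bits):
    """Replace the least significant bit of each color component with a
    watermark bit; color is an (R, G, B) tuple, bits a 3-tuple of 0/1."""
    return tuple((c & ~1) | b for c, b in zip(color, bits))

def lsb_extract(color):
    """Read the watermark bits back out of the color components."""
    return tuple(c & 1 for c in color)
```

Flipping only the lowest bit changes each component by at most 1, which is why LSB embedding is visually imperceptible in rendered frames.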

Speech enhancement using subband spectral-subtraction
CAI Yu, HAO Chengpeng, HOU Caohuan
Journal of Computer Applications    2014, 34 (2): 567-571.  
A subband spectral-subtraction method for speech enhancement was proposed. The time series was decomposed into frequency subbands through a filter bank, and in each subband an improved spectral-subtraction technique was applied. The method rests on the observation that most environmental noises affect the speech spectrum nonuniformly across frequencies, so multiband processing is more accurate and effective. In speech processing experiments, objective evaluation shows that the proposed algorithm performs well in noise suppression and speech quality improvement, making the processed speech more comfortable and intelligible.
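Within each subband the method applies spectral subtraction; a full-band, single-frame sketch (the oversubtraction factor alpha and spectral floor beta are common defaults, not the paper's values):

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, beta=0.01):
    """Magnitude spectral subtraction on one frame (full-band sketch).

    noise_mag: estimate of the noise magnitude spectrum; alpha: an
    oversubtraction factor; beta: spectral floor to avoid negative or
    zeroed magnitudes (musical noise).
    """
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean = np.maximum(mag - alpha * noise_mag, beta * mag)  # floor the residual
    return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))
```

The subband variant runs this same subtraction independently on each filter-bank output, so alpha can track how strongly noise contaminates each band.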
On-line forum hot topic mining method based on topic cluster evaluation
JIANG Hao, CHEN Xingshu, DU Min
Journal of Computer Applications    2013, 33 (11): 3071-3075.  
Hot topic mining is an important technical foundation for monitoring public opinion. As current hot topic mining methods cannot eliminate the effect of word noise and rely on a single hot-degree evaluation method, a new mining method based on topic cluster evaluation was proposed. After the forum data was modeled by the Latent Dirichlet Allocation (LDA) topic model and topic noise was cut off, the data were clustered by K-means++, which improves cluster center selection. Finally, clusters were evaluated in three aspects: abruptness, purity and attention degree of topics. The experimental results show that both cluster quality and clustering speed rise by setting the topic noise threshold to 0.75 and the cluster number to 50. Tests on real data sets also prove the effectiveness of ranking clusters by their probability of containing a hot topic. Finally, a method was developed for displaying hot topics.
Efficient and universal testing method of user interface based on SilkTest and XML
HE Hao, CHENG Chunling, ZHANG Zhengyu, ZHANG Dengyin
Journal of Computer Applications    2013, 33 (01): 258-261.   DOI: 10.3724/SP.J.1087.2013.00258
In software testing, User Interface (UI) testing plays an important role in ensuring software quality and reliability. Concerning the lack of stability and generality in UI testing methods based on handle recognition, an improved method of recognizing and testing UI controls based on Extensible Markup Language (XML) was proposed. The method exploited XML's convenient data processing and combined it with the automation testing tool SilkTest to improve traditional UI testing. Considering the multi-language, multi-version nature of AutoCAD, an automation testing scheme was designed on the basis of the proposed method to test dialog boxes in a series of AutoCAD products. The experimental results show that the improved method reduces control recognition time and program redundancy, and improves testing efficiency and the stability of UI control recognition.
Reference | Related Articles | Metrics
Research into community structure of resting-state brain functional network
WANG Yan-qun, LI Hai-fang, GUO Hao, CHEN Jun-jie
Journal of Computer Applications    2012, 32 (07): 2044-2048.   DOI: 10.3724/SP.J.1087.2012.02044
Abstract1054)      PDF (815KB)(731)       Save
A community detecting algorithm was applied to the human brain functional network to explore the mechanism of the human brain. The brain functional data of 28 healthy subjects were collected by functional Magnetic Resonance Imaging (fMRI), and a brain functional network based on the time series was constructed. A threshold range for the network was designated according to modularity and fully connected network theory. Community structures were detected with a hierarchical clustering algorithm and a greedy algorithm respectively, and the experimental results show that similar community structures were obtained. Analyzing modularity across the threshold range then reveals how performance varies with the threshold, and an effective threshold range of 180 to 320 vertices in the brain network was proposed. Exploring the community structure helps to understand the mechanism of brain lesions, providing a tool for the diagnosis and treatment of brain diseases.
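Since the threshold range is chosen by modularity, a small sketch of Newman's modularity Q for a hard partition may help. The edge-list representation is illustrative; a real fMRI network would be built by thresholding time-series correlations between brain regions.

```python
def modularity(edges, community_of):
    """Newman modularity for an undirected, unweighted graph:
    Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)."""
    m = len(edges)
    deg = {}
    a = {}  # adjacency counts, stored in both orders for symmetric lookup
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        a[(u, v)] = a.get((u, v), 0) + 1
        a[(v, u)] = a.get((v, u), 0) + 1
    q = 0.0
    nodes = list(deg)
    for i in nodes:
        for j in nodes:
            if community_of[i] == community_of[j]:
                q += a.get((i, j), 0) - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)
```

For two triangles joined by a single bridge edge, splitting along the bridge gives Q = 5/14 ≈ 0.357, the textbook value for that graph.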
Reference | Related Articles | Metrics
Application of machinery manufacturing decision-making based on ID3 algorithm
LU Zhao, CHEN Shi-ping
Journal of Computer Applications    2011, 31 (11): 3087-3090.   DOI: 10.3724/SP.J.1087.2011.03087
Abstract1042)      PDF (617KB)(442)       Save
To improve quality management and decision-making efficiency in the machinery manufacturing industry, this paper took the business of a typical machinery enterprise as an example and mined its management information data following the Cross-Industry Standard Process for Data Mining (CRISP-DM). Using a gain-ratio calculation based on the ID3 decision tree algorithm, the study generated a decision tree model and deployed an initial implementation in the company's Enterprise Resources Planning (ERP) system. Tests and analysis show that the model can standardize the business process and enhance decision-making efficiency.
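The attribute-selection step can be sketched as below. Strictly speaking, the gain ratio is C4.5's refinement of ID3's plain information gain; this sketch computes both ingredients for one nominal attribute, with an illustrative data layout (rows of attribute tuples plus a parallel label list).

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, attr_index, labels):
    """Gain ratio = information gain / split information for one attribute,
    the criterion used when choosing the next node of the decision tree."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(y)
    gain = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum((len(g) / n) * math.log2(len(g) / n) for g in groups.values())
    return gain / split_info if split_info else 0.0
```

Tree growth then recurses: pick the attribute with the highest gain ratio, partition the rows by its values, and repeat on each partition until the labels are pure.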
Related Articles | Metrics
Self-adaptive image encryption algorithm based on chaotic map
DENG Shaojiang, HUANG Guichao, CHEN Zhijian, XIAO Xiao
Journal of Computer Applications    2011, 31 (06): 1502-1504.   DOI: 10.3724/SP.J.1087.2011.01502
Abstract1955)      PDF (617KB)(619)       Save
This paper presented a new self-adaptive image encryption algorithm designed for improved robustness. Under this algorithm, a gray or color image was divided into 2×2 blocks. A matrix of the same size as the top right block was created from the pixel gray-scale values of the top left block under the Chebyshev mapping, and the gray-scale values of the top right block were then replaced by this matrix. The remaining blocks were encrypted in the same manner in clockwise order until the top left block was finally encrypted. The algorithm is not restricted by image size and suits both gray and color images, which leads to better robustness. Meanwhile, the introduced gray-scale value diffusion mechanism gives the algorithm strong diffusion and confusion properties.
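A minimal sketch of such block masking under the Chebyshev map x_{n+1} = cos(k·arccos(x_n)): the seed pixel drives a chaotic keystream that masks the neighboring block. The map degree, the quantisation to 0..255, and the XOR masking are illustrative choices; the paper's exact replacement rule may differ.

```python
import math

def chebyshev_stream(seed_pixel, length, degree=4):
    """Iterate the Chebyshev map x -> cos(degree * acos(x)), seeded from a
    pixel value mapped into (-1, 1); each iterate is quantised to 0..255.
    (degree=4 is an illustrative choice, not taken from the paper.)"""
    x = (seed_pixel + 0.5) / 256.0 * 2 - 1  # map 0..255 into (-1, 1)
    out = []
    for _ in range(length):
        x = math.cos(degree * math.acos(x))
        out.append(int((x + 1) / 2 * 255) & 0xFF)
    return out

def encrypt_block(block, seed_pixel):
    """Mask a block's pixels with the keystream; XOR makes the same call
    with the same seed also decrypt."""
    ks = chebyshev_stream(seed_pixel, len(block))
    return [p ^ k for p, k in zip(block, ks)]
```

Because each block's keystream is seeded by already-encrypted neighbor pixels, a change in one pixel propagates around the image, which is the diffusion property the abstract refers to.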
Related Articles | Metrics
Ethnic group evolution algorithm for constrained numerical optimization
Hao CHEN, Xiao-ying PAN, Du-wu CUI
Journal of Computer Applications    2011, 31 (04): 1090-1093.   DOI: 10.3724/SP.J.1087.2011.01090
Abstract1658)      PDF (556KB)(435)       Save
To improve the performance of the Ethnic Group Evolution Algorithm (EGEA) on constrained functions, a macrogamete filter mechanism based on a linear truncation strategy was proposed to keep the macrogamete scale stable during evolution. This strategy reduces sharp fluctuations of the ethnic group structure and effectively improves the search efficiency of EGEA. Simulations on six classical constrained functions show that the linear truncation strategy makes EGEA a competent algorithm for constrained functions.
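The truncation step might look like the following sketch, where each individual carries an objective value and a constraint-violation amount. The penalty weight and the (objective, violation) representation are illustrative, not taken from the paper; EGEA's actual operators are richer than this.

```python
def penalised_fitness(objective, violation, penalty=1e3):
    """Feasibility-biased fitness for minimisation: a large penalty weight
    makes constraint violation dominate the raw objective.
    (The penalty weight is an illustrative choice.)"""
    return objective + penalty * violation

def truncate_macrogametes(pool, capacity):
    """Linear truncation: keep exactly `capacity` best (lowest-fitness)
    individuals, so the macrogamete scale stays stable each generation."""
    return sorted(pool, key=lambda ind: penalised_fitness(*ind))[:capacity]
```

Keeping the pool size fixed is what damps the structural fluctuation the abstract mentions: the selection pressure per generation stays constant instead of swinging with the pool size.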
Related Articles | Metrics
Method of mutual information filtration with dual-threshold for term extraction
Shi-chao CHEN, Bin YU
Journal of Computer Applications    2011, 31 (04): 1070-1073.   DOI: 10.3724/SP.J.1087.2011.01070
Abstract1368)      PDF (789KB)(460)       Save
To reduce the impact of the problems inherent in the mutual information method on filtering quality, a method of candidate term filtration and extraction was proposed, together with a determination algorithm based on a partial evaluation indicator that finds the best upper and lower thresholds quickly and accurately through data sampling, statistics and computation. Compared with mutual information filtering with a single threshold, the proposed method filters and extracts candidate terms with two thresholds without changing the mutual information formula. The experimental results show that the proposed method significantly improves precision and F-measure under the same conditions.
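A sketch of the dual-threshold filter: pointwise mutual information is computed exactly as in the single-threshold method, and candidates are kept only when their MI falls between the lower and upper bounds. The names and candidate representation are illustrative, and the paper's indicator-based threshold search is not reproduced.

```python
import math

def mutual_information(count_xy, count_x, count_y, total):
    """Pointwise mutual information of a candidate term pair:
    log2( p(x, y) / (p(x) * p(y)) )."""
    return math.log2((count_xy * total) / (count_x * count_y))

def dual_threshold_filter(candidates, lower, upper):
    """Keep candidates whose MI lies inside [lower, upper]: the lower bound
    drops low-association noise as in the classical single-threshold method,
    while the added upper bound also removes pairs whose MI is implausibly
    high (e.g. rare co-occurrences inflated by tiny marginal counts)."""
    return [(term, mi) for term, mi in candidates if lower <= mi <= upper]
```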
Related Articles | Metrics
Algorithm for solving K-shortest paths problem in complicated network
Liu Jia, Shaofang Xia, Yanan Lv, Lichao Chen
Journal of Computer Applications   
Abstract1432)      PDF (897KB)(1321)       Save
Focusing on optimization problems in complicated networks, an algorithm named KSPA (K-Shortest Paths based on A*) was proposed to solve the K-shortest paths problem in a complicated network. Time cost was taken as the target function, and the construction of the target function model was given. Experimental results show that the proposed KSPA algorithm can quickly solve K-shortest paths problems in a multigraph.
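The search skeleton can be sketched as a best-first expansion of partial paths: pop the cheapest path, record it if it ends at the destination, otherwise extend it. This is A* with a zero heuristic; KSPA as described would add a time-cost heuristic to the priority. Multigraphs are handled by listing parallel edges separately in the adjacency map. The representation is illustrative.

```python
import heapq

def k_shortest_paths(adj, src, dst, k):
    """Return up to k cheapest (cost, path) pairs from src to dst.
    adj maps a node to a list of (neighbor, edge_cost) pairs; parallel
    edges of a multigraph appear as separate entries."""
    heap = [(0, [src])]
    found = []
    pops = {}  # how many times each node has been popped
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        pops[node] = pops.get(node, 0) + 1
        if node == dst:
            found.append((cost, path))
            continue
        if pops[node] > k:
            continue  # a node popped more than k times cannot start a new top-k path
        for nxt, w in adj.get(node, []):
            heapq.heappush(heap, (cost + w, path + [nxt]))
    return found
```

With non-negative edge costs, paths leave the heap in non-decreasing cost order, so the first k arrivals at `dst` are the k shortest paths.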
Related Articles | Metrics
Fault diagnosis design based on qualitative constraint
SHAO Chen-xi, ZHANG Jun-tao, BAI Fang-zhou, ZHANG Yi
Journal of Computer Applications    2005, 25 (07): 1647-1650.   DOI: 10.3724/SP.J.1087.2005.01647
Abstract1093)      PDF (566KB)(810)       Save
Based on B. J. Kuipers' QSIM algorithm, a comparative constraint was presented to reduce the diagnosis space, and the transfer rules of qualitative constraints were used in the simulation and reasoning of system diagnosis. Starting from an observed faulty state, the algorithm located the position and cause of variable discrepancies according to these transfer rules, reasoned forward from the diagnosed fault to verify the result, and deleted redundant diagnosis results. In an example of a condensation refrigeration system, the constraint relations were built from qualitative differential equations. For the fault symptom of poor refrigeration efficiency, an excess of air or Freon mixed into the system was diagnosed as the fault source, consistent with the actual system.
Reference | Related Articles | Metrics